Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution


Related articles

Automated State-Dependent Importance Sampling for Markov Jump Processes via Sampling from the Zero-Variance Distribution

Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a stat...
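As background for the kind of change of measure the paper automates, here is a minimal sketch of a fixed (non-state-dependent) importance sampling scheme for a rare event in a birth-death jump chain: the classical rate-swapping tilt for an M/M/1-style queue, with all parameters chosen purely for illustration (this is not the paper's algorithm). Since the hitting probability depends only on the embedded jump chain, it suffices to reweight the discrete jumps.

```python
import random

def is_estimate(lam=1.0, mu=2.0, start=1, level=10, n_paths=20000, seed=42):
    """Estimate P(reach `level` before 0 | start) for a birth-death chain
    via importance sampling: swap birth/death rates (the classical tilt
    for the M/M/1 queue) and reweight each jump by its likelihood ratio."""
    rng = random.Random(seed)
    p = lam / (lam + mu)          # up-probability of the embedded jump chain
    q = 1.0 - p
    p_is = q                      # tilted measure: swap up/down probabilities
    q_is = p
    total = 0.0
    for _ in range(n_paths):
        x, w = start, 1.0
        while 0 < x < level:
            if rng.random() < p_is:
                x += 1
                w *= p / p_is     # likelihood ratio for an up-jump
            else:
                x -= 1
                w *= q / q_is     # likelihood ratio for a down-jump
        if x == level:
            total += w            # weighted indicator of the rare event
    return total / n_paths
```

For lam = 1, mu = 2 the exact answer (gambler's ruin for the embedded chain) is 1/1023; the swapped-rate tilt makes the rare event likely under simulation while the per-jump likelihood ratios keep the estimator unbiased.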


Zero-Variance Importance Sampling Estimators for Markov Process Expectations

We study the structure of zero-variance importance sampling estimators for expectations of functionals of Markov processes. For a class of expectations that can be characterized as solutions to linear systems, we show that a zero-variance estimator can be constructed by using an importance distribution that preserves the Markovian nature of the underlying process. This suggests that good practic...
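The structural point above can be made concrete on a toy example (illustrative, not from the paper): for a gambler's-ruin hitting probability, twisting each transition by the harmonic function h preserves the Markov property and yields an estimator whose every realization equals the true answer exactly.

```python
import random

def zero_variance_paths(p=0.25, start=1, level=10, n_paths=100, seed=1):
    """Sample from the Markovian zero-variance change of measure for the
    gambler's-ruin probability h(j) = P(hit `level` before 0 from j).
    Every weighted path value telescopes to h(start) exactly."""
    q = 1.0 - p
    r = q / p

    def h(j):
        # harmonic function: h(j) = p*h(j+1) + q*h(j-1), h(0)=0, h(level)=1
        return (r**j - 1.0) / (r**level - 1.0)

    rng = random.Random(seed)
    out = []
    for _ in range(n_paths):
        x, w = start, 1.0
        while 0 < x < level:
            up, down = p * h(x + 1), q * h(x - 1)
            p_up = up / (up + down)        # zero-variance transition probability
            if rng.random() < p_up:
                w *= p / p_up              # likelihood ratio, up-jump
                x += 1
            else:
                w *= q / (1.0 - p_up)      # likelihood ratio, down-jump
                x -= 1
        out.append(w if x == level else 0.0)
    return out
```

Because h(0) = 0, the zero-variance measure never lets a path hit 0, and the product of likelihood ratios telescopes as h(x)/h(next) along the path, so every realization equals h(start) up to floating-point error.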


Importance Sampling for Markov Chains: Asymptotics for the Variance

In this paper, we apply the Perron-Frobenius theory for non-negative matrices to the analysis of variance asymptotics for simulations of finite-state Markov chains to which importance sampling is applied. The results show that we can typically expect the variance to grow (at least) exponentially in the length of the simulated time horizon. The exponential rate constant is determined by t...
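A small sketch of the kind of computation involved (illustrative matrices, not from the paper): for a chain with transition matrix P simulated under importance distribution Q, the second moment of the n-step likelihood ratio grows like rho(M)**n, where M[i][j] = P[i][j]**2 / Q[i][j]. By Cauchy-Schwarz each row of M sums to at least 1, so rho(M) >= 1, with equality when Q = P.

```python
def perron_root(M, n_iter=200):
    """Power iteration (max-norm) for the Perron root of a
    non-negative primitive matrix."""
    n = len(M)
    v = [1.0] * n
    rho = 1.0
    for _ in range(n_iter):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        rho = max(w)
        v = [x / rho for x in w]
    return rho

def second_moment_matrix(P, Q):
    """M[i][j] = P[i][j]**2 / Q[i][j]; rho(M) is the exponential growth
    rate of the second moment of the n-step likelihood ratio."""
    n = len(P)
    return [[P[i][j] ** 2 / Q[i][j] for j in range(n)] for i in range(n)]
```

For a mismatched Q (e.g. uniform) the Perron root exceeds 1, quantifying the exponential variance blow-up; for Q = P the matrix M is just P itself and the root is exactly 1.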


Fast MCMC sampling for Markov jump processes and extensions

Markov jump processes (or continuous-time Markov chains) are a simple and important class of continuous-time dynamical systems. In this paper, we tackle the problem of simulating from the posterior distribution over paths in these models, given partial and noisy observations. Our approach is an auxiliary variable Gibbs sampler, and is based on the idea of uniformization. This sets up a Markov c...
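A minimal sketch of the uniformization identity that the sampler builds on (used here only to evaluate transition probabilities deterministically, not as the paper's auxiliary-variable Gibbs sampler): exp(Q t) = sum over n of Pois(n; Omega*t) * B**n, where B = I + Q/Omega and Omega bounds the largest exit rate.

```python
import math

def mat_mul(A, B):
    """Plain dense matrix product for small matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def uniformization(Q, t, n_terms=80):
    """Transition matrix P(t) = exp(Q t) via the uniformization series:
    P(t) = sum_n Pois(n; omega*t) * B**n with B = I + Q/omega,
    omega >= max_i |Q[i][i]|."""
    n = len(Q)
    omega = max(-Q[i][i] for i in range(n))
    B = [[(1.0 if i == j else 0.0) + Q[i][j] / omega for j in range(n)]
         for i in range(n)]
    P = [[0.0] * n for _ in range(n)]
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # B**0
    weight = math.exp(-omega * t)                 # Pois(0; omega*t)
    for k in range(n_terms):
        for i in range(n):
            for j in range(n):
                P[i][j] += weight * term[i][j]
        term = mat_mul(term, B)                   # B**(k+1)
        weight *= omega * t / (k + 1)             # next Poisson weight
    return P
```

For a two-state chain with rates a (state 1 to 2) and b (state 2 to 1), the series reproduces the closed form P[0][0] = b/(a+b) + a/(a+b) * exp(-(a+b) t), which makes the identity easy to check.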


State-dependent importance sampling schemes via minimum cross-entropy

We present a method to obtain state- and time-dependent importance sampling estimators by repeatedly solving a minimum cross-entropy (MCE) program as the simulation progresses. This MCE-based approach lends a foundation to the natural notion of stopping the change of measure when it is no longer needed. We use this method to obtain a state- and time-dependent estimator for the one-tailed probability of ...
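For contrast with the repeated-MCE scheme described above, here is the basic single-level cross-entropy update for an exponential tilting family (a standard textbook construction with hypothetical parameters, not the authors' state- and time-dependent program): for the family N(theta, 1) applied to a Gaussian tail probability, the cross-entropy program has a closed-form solution, namely the likelihood-ratio-weighted mean of the elite samples.

```python
import math
import random

def ce_gaussian_tail(gamma=4.0, n_ce=5, n_samples=5000, seed=7):
    """Cross-entropy tuning of an exponential tilt N(theta, 1) for
    P(X > gamma) with X ~ N(0,1), followed by an importance-sampling pass."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(n_ce):
        xs = sorted(rng.gauss(theta, 1.0) for _ in range(n_samples))
        level = min(gamma, xs[int(0.95 * n_samples)])   # elite threshold
        elite = [x for x in xs if x >= level]
        # likelihood ratios back to N(0,1); the CE update for this family
        # is the weighted mean of the elite samples
        ws = [math.exp(-theta * x + theta * theta / 2.0) for x in elite]
        theta = sum(w * x for w, x in zip(ws, elite)) / sum(ws)
    # final importance-sampling estimate under the tuned tilt
    est = sum(math.exp(-theta * x + theta * theta / 2.0)
              for x in (rng.gauss(theta, 1.0) for _ in range(n_samples))
              if x > gamma) / n_samples
    return est
```

The elite-quantile threshold lets the iteration bootstrap even though the target event is initially too rare to hit; after a few rounds theta settles near the conditional mean E[X | X > gamma], which is close to the optimal tilt.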



Journal

Journal title: Journal of Applied Probability

Year: 2014

ISSN: 0021-9002,1475-6072

DOI: 10.1239/jap/1409932671